

Search for: All records

Creators/Authors contains: "Mok, Ricky"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo period.

Some links on this page may take you to non-federal websites whose policies may differ from those of this site.

  1. Since the exhaustion of unallocated IP addresses at the Internet Assigned Numbers Authority (IANA), a market for IPv4 addresses has emerged. As a complement to purchasing address space, leasing IP addresses is becoming increasingly popular. Leasing provides a cost-effective alternative for organizations that seek to scale up without a high upfront investment. However, malicious actors also benefit from leasing, as it enables them to rapidly cycle through different addresses, circumventing security measures such as IP blocklisting. We explore the emerging IP leasing market and its implications for Internet security. We examine leasing market data, leveraging blocklists as an indirect measure of involvement in various forms of network abuse. In February 2025, leased prefixes were 2.89× more likely to be flagged by blocklists than non-leased prefixes, a result that raises questions about whether the IP leasing market should be subject to closer scrutiny. (A minimal sketch of this blocklist comparison follows this record.)
    Free, publicly-accessible full text available June 10, 2026
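    The comparison above reduces to a relative flag rate between two prefix populations. Below is a minimal sketch of that computation, assuming an exact-match join between prefix lists; the file names, formats, and prefix granularity are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of the blocklist comparison described above.
# File names and exact-match joins are assumptions for illustration.
import ipaddress

def load_prefixes(path):
    """Read one CIDR prefix per line into a set of network objects."""
    with open(path) as f:
        return {ipaddress.ip_network(line.strip()) for line in f if line.strip()}

leased = load_prefixes("leased_prefixes.txt")           # hypothetical input
non_leased = load_prefixes("non_leased_prefixes.txt")   # hypothetical input
flagged = load_prefixes("blocklisted_prefixes.txt")     # hypothetical input

def flag_rate(prefixes):
    """Fraction of prefixes that appear on the blocklist."""
    return len(prefixes & flagged) / len(prefixes)

ratio = flag_rate(leased) / flag_rate(non_leased)
print(f"Leased prefixes are {ratio:.2f}x more likely to be blocklisted")
```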
  2. Free, publicly-accessible full text available March 7, 2026
  3. Network Telescopes, often referred to as darknets, capture unsolicited traffic directed toward advertised but unused IP spaces, enabling researchers and operators to monitor malicious, Internet-wide network phenomena such as vulnerability scanning, botnet propagation, and DoS backscatter. Detecting these events, however, has become increasingly challenging due to the growing traffic volumes that telescopes receive. To address this, we introduce DarkSim, a novel analytic framework that utilizes Dynamic Time Warping (DTW) to measure similarities within the high-dimensional time series of network traffic. DarkSim combines traditional raw packet processing with statistical approaches, identifying traffic anomalies and enabling rapid time-to-insight. We evaluate our framework against DarkGLASSO, an existing method based on the Graphical LASSO algorithm, using data from the UCSD Network Telescope. Based on our manually classified detections, DarkSim achieved perfect precision and overlapped with up to 91% of DarkGLASSO's detections, whereas DarkGLASSO achieved at most 73.3% precision and a 37.5% detection overlap with DarkSim. We further demonstrate DarkSim's capability to detect two real-world events in our case studies: (1) an increase in scanning activities surrounding CVE public disclosures, and (2) shifts in country- and network-level scanning patterns that indicate aggressive scanning. DarkSim provides a detailed and interpretable analysis framework for time-series anomalies, representing a new contribution to network security analytics. (A toy DTW computation follows this record.)
    Free, publicly-accessible full text available November 4, 2025
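    To illustrate the similarity measure DarkSim builds on, here is a textbook Dynamic Time Warping distance between two one-dimensional traffic series. The real framework operates on high-dimensional telescope traffic and adds statistical detection logic not shown here; the toy series below are invented.

```python
# Classic O(n*m) Dynamic Time Warping distance between two 1-D series.
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two toy per-minute packet-count series: a time-shifted spike still
# scores as similar under DTW, unlike a pointwise Euclidean distance.
baseline = np.array([10, 12, 11, 90, 95, 12, 10], dtype=float)
shifted  = np.array([11, 10, 13, 12, 92, 94, 11], dtype=float)
print(dtw_distance(baseline, shifted))
```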
  4. Motivated by the impressive but diffuse scope of DDoS research and reporting, we undertake a multistakeholder (joint industry-academic) analysis to seek convergence across the best available macroscopic views of the relative trends in two dominant classes of attacks: direct-path attacks and reflection-amplification attacks. We first analyze 24 industry reports to extract trends and (in)consistencies across observations by commercial stakeholders in 2022. We then analyze ten data sets spanning industry and academic sources, across four years (2019-2023), to find and explain discrepancies based on data sources, vantage points, methods, and parameters. Our method includes a new approach: we share an aggregated list of DDoS targets with industry players, who return the results of joining this list with their proprietary data sources, revealing gaps in the visibility of the academic data sources (a toy version of this join appears after this record). We use academic data sources to explore an industry-reported relative drop in spoofed reflection-amplification attacks in 2021-2022. Our study illustrates the value, but also the challenge, of independent validation of security-related properties of Internet infrastructure. Finally, we reflect on opportunities to facilitate greater common understanding of the DDoS landscape. We hope our results inform not only future academic and industry pursuits but also emerging policy efforts to reduce systemic Internet security vulnerabilities.
    Free, publicly-accessible full text available November 4, 2025
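    The cross-dataset join described above is, at its core, a set intersection between an aggregated target list and another vantage point's observations. A toy illustration, with invented addresses standing in for the proprietary data:

```python
# Toy visibility comparison: intersect an aggregated DDoS target list
# with another party's observations to quantify visibility gaps.
# All addresses are hypothetical (RFC 5737 documentation ranges).
academic_targets = {"192.0.2.10", "198.51.100.7", "203.0.113.42"}
industry_targets = {"192.0.2.10", "203.0.113.42", "203.0.113.99"}

seen_by_both = academic_targets & industry_targets
academic_only = academic_targets - industry_targets
industry_only = industry_targets - academic_targets

print(f"overlap: {len(seen_by_both)} targets")
print(f"visible only to the academic sources: {sorted(academic_only)}")
print(f"visible only to the industry source:  {sorted(industry_only)}")
```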
  5. PacketLab is a recently proposed model for accessing remote vantage points. The core design is for the vantage points to export low-level network operations that measurement researchers can rely on to construct more complex measurements. Motivating the model is the assumption that such an approach can overcome persistent challenges, such as the operational cost and security concerns of vantage point sharing, that researchers face in launching distributed active Internet measurement experiments. However, the limitations imposed by the core design merit a deeper analysis of the applicability of such a model to real-world measurements of interest. We undertook this analysis based on a survey of recent Internet measurement studies, followed by an empirical comparison of PacketLab-based versus native implementations of common measurement methods. We showed that for several canonical measurement types common in past studies, PacketLab yielded results similar to native versions of the same measurements. Our results suggest that PacketLab could help reproduce or extend around 16.4% (28 out of 171) of all surveyed studies and accommodate a variety of measurements, from latency, throughput, and network path to non-timing data. (A native sketch of one such canonical measurement follows this record.)
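    As a concrete example of a canonical latency measurement of the kind compared above, here is a native Python sketch of TCP connect RTT. Under the PacketLab model the equivalent low-level operations would be exported by a remote vantage point rather than executed locally; the target host is a placeholder.

```python
# Native TCP connect latency: time to complete a TCP handshake.
import socket
import time

def tcp_connect_rtt(host, port=80, timeout=3.0):
    """Return the wall-clock time (seconds) to establish a TCP connection."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return time.monotonic() - start

# Placeholder target; substitute any reachable host.
print(f"{tcp_connect_rtt('example.com') * 1000:.1f} ms")
```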
  6. We investigate a novel approach to the use of jitter to infer network congestion using data collected by probes in access networks. We discovered a set of features in the time series of jitter and of jitter dispersion (a jitter-derived time series we define in this paper) that are characteristic of periods of congestion. We leverage these concepts to create a jitter-based congestion inference framework that we call Jitterbug. We apply Jitterbug's capabilities to a wide range of traffic scenarios and discover that Jitterbug can correctly identify both recurrent and one-off congestion events. We validate Jitterbug inferences against state-of-the-art autocorrelation-based inferences of recurrent congestion. We find that the two approaches have strong congruity in their inferences, but Jitterbug holds promise for detecting one-off as well as recurrent congestion. We identify several future directions for this research, including leveraging ML/AI techniques to optimize the performance and accuracy of this approach in operational settings. (A sketch of the two time series follows this record.)
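    A minimal sketch of the two time series Jitterbug analyzes. Jitter is taken here as the first difference of consecutive RTT samples; the rolling min-max spread used below for "jitter dispersion" is an illustrative stand-in, not necessarily the paper's exact definition.

```python
# Derive jitter and an assumed jitter-dispersion series from RTT samples.
import numpy as np

def jitter_series(rtts):
    """Jitter as the first-difference of RTT samples (ms)."""
    return np.diff(np.asarray(rtts, dtype=float))

def jitter_dispersion(jitter, window=5):
    """Rolling max-min spread of the jitter series (assumed definition)."""
    return np.array([jitter[i:i + window].max() - jitter[i:i + window].min()
                     for i in range(len(jitter) - window + 1)])

rtts = [20, 21, 20, 22, 45, 48, 44, 47, 21, 20]  # toy congestion episode
j = jitter_series(rtts)
print(jitter_dispersion(j))  # spread widens around the RTT step-change
```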
  7. Web-based speed tests are popular among end-users for measuring their network performance. Thousands of measurement servers have been deployed in diverse geographical and network locations to serve users worldwide. However, most speed tests have opaque methodologies, which makes it difficult for researchers to interpret their highly aggregated test results, let alone leverage them for various studies. In this paper, we propose WebTestKit, a unified and configurable framework for facilitating automatic test execution and cross-layer analysis of test results for five major web-based speed test platforms. Capturing only the packet headers of traffic traces, WebTestKit performs in-depth analysis by carefully extracting HTTP and timing information from test runs. Our testbed experiments showed that WebTestKit is lightweight and accurate in interpreting encrypted measurement traffic. We applied WebTestKit to compare the use of HTTP requests across speed tests and to investigate the root causes impeding the accuracy of latency measurements, which play a vital role in test server selection and throughput estimation. (A header-only timing sketch follows this record.)
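    In the spirit of the header-only analysis described above, here is a minimal sketch that estimates TCP handshake RTTs from a packet capture by pairing each SYN with its SYN-ACK, using the dpkt library. WebTestKit's actual parsing and HTTP/timing extraction are far more involved, and the capture file name is a placeholder.

```python
# Estimate TCP handshake RTTs from a pcap using packet headers only.
import socket
import dpkt

def handshake_rtts(pcap_path):
    syn_times = {}  # (src, dst, sport, dport) -> timestamp of SYN
    rtts = []
    with open(pcap_path, "rb") as f:
        for ts, buf in dpkt.pcap.Reader(f):
            eth = dpkt.ethernet.Ethernet(buf)
            if not isinstance(eth.data, dpkt.ip.IP):
                continue
            ip = eth.data
            if not isinstance(ip.data, dpkt.tcp.TCP):
                continue
            tcp = ip.data
            src, dst = socket.inet_ntoa(ip.src), socket.inet_ntoa(ip.dst)
            syn = tcp.flags & dpkt.tcp.TH_SYN
            ack = tcp.flags & dpkt.tcp.TH_ACK
            if syn and not ack:        # client SYN: remember when it left
                syn_times[(src, dst, tcp.sport, tcp.dport)] = ts
            elif syn and ack:          # server SYN-ACK: match reverse flow
                key = (dst, src, tcp.dport, tcp.sport)
                if key in syn_times:
                    rtts.append(ts - syn_times.pop(key))
    return rtts

print(handshake_rtts("speedtest_trace.pcap"))  # hypothetical capture file
```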
  8. Public cloud platforms are vital in supporting online applications for remote learning and telecommuting during the COVID-19 pandemic. The network performance between cloud regions and access networks directly impacts application performance and users' quality of experience (QoE). However, the location and network connectivity of vantage points often limit the visibility of edge-based measurement platforms (e.g., RIPE Atlas). We designed and implemented the CLoud-based Applications Speed Platform (CLASP) to measure performance to various networks from virtual machines in cloud regions, using speed test servers that have been widely deployed on the Internet. In our five-month longitudinal measurements on Google Cloud Platform (GCP), we found that 30-70% of the ISPs we measured showed severe throughput degradation from the day's peak throughput. (A toy version of this degradation metric follows this record.)
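    A toy version of the degradation metric reported above: for each ISP, compare measured throughput against that ISP's daily peak. The column names, sample values, and hourly granularity are illustrative assumptions, not CLASP's exact methodology.

```python
# Per-ISP throughput degradation relative to the daily peak.
import pandas as pd

df = pd.DataFrame({
    "isp":  ["A"] * 4 + ["B"] * 4,
    "hour": [0, 6, 12, 18] * 2,
    "mbps": [940, 910, 890, 410,   520, 500, 480, 460],  # toy samples
})

peak = df.groupby("isp")["mbps"].transform("max")       # each ISP's peak
df["degradation_pct"] = 100 * (1 - df["mbps"] / peak)   # drop from peak
worst = df.groupby("isp")["degradation_pct"].max()
print(worst)  # e.g., ISP A drops ~56% from its daily peak in the evening
```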